To use this notebook, a single production/test web node must be bootstrapped with the ipython and django-shell-plus Python libraries. Once bootstrapping is complete, and while forwarding a local port to the port the ipython notebook server is listening on, you can open the notebook in your browser using the token printed in the SSH session when the server starts.
To bootstrap a specific node for use with this notebook, SSH into the node while forwarding a local port to localhost:8888 on the node, e.g.:

ssh firecares-prod -L 8890:localhost:8888

forwards local port 8890 to port 8888 on the web node. This assumes a "firecares-prod" entry with the correct web server IP in your ~/.ssh/config.
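For reference, the corresponding ~/.ssh/config entry might look like the following (the IP address is a placeholder; the remote user is assumed to be ubuntu, matching the bootstrap steps below):

```
Host firecares-prod
    HostName 203.0.113.10   # placeholder: substitute the real web node IP
    User ubuntu
```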
Then, on the node:

# as the ubuntu user
sudo chown -R firecares: /run/user/1000
sudo su firecares
workon firecares
pip install -r dev_requirements.txt
python manage.py shell_plus --notebook --no-browser --settings=firecares.settings.local
At this point, the console will print a line like "The Jupyter Notebook is running at: http://localhost:8888/?token=XXXX". Copy that URL into your browser, substituting the local port you are forwarding (e.g. 8890) for the default 8888 if necessary.
Since the ipython notebook server runs under django-shell-plus, all of the FireCARES models are imported automatically. From here, any command you execute in the notebook runs immediately on the remote web node.
This notebook is useful when a department's FDID has been corrected. The cells below look up the department, pull its raw and geocoded fire counts from NFIRS, export building fires to S3, and recompute the derived data (NFIRS counts, census geometry, and performance score).
In [9]:
import csv
import psycopg2
from firecares.tasks import update
from firecares.utils import dictfetchall
from django.db import connections
from django.conf import settings
from django.core.management import call_command
from IPython.display import display
import pandas as pd
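The dictfetchall helper imported above returns each row from a raw database cursor as a dict keyed by column name, which is what lets later cells index rows like x['count']. A minimal sketch of such a helper (the actual firecares.utils implementation may differ), demonstrated against an in-memory SQLite table standing in for fireincident:

```python
import sqlite3

def dictfetchall(cursor):
    # Return all rows from a cursor as a list of dicts keyed by column name
    columns = [col[0] for col in cursor.description]
    return [dict(zip(columns, row)) for row in cursor.fetchall()]

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("CREATE TABLE fireincident (fdid TEXT, state TEXT, year INTEGER)")
cur.executemany("INSERT INTO fireincident VALUES (?, ?, ?)",
                [('18M04', 'WA', 2012), ('18M04', 'WA', 2013)])
cur.execute('SELECT count(1) AS "count", year FROM fireincident '
            'GROUP BY year ORDER BY year')
rows = dictfetchall(cur)
print(rows)  # [{'count': 1, 'year': 2012}, {'count': 1, 'year': 2013}]
```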
In [2]:
fd = {'fdid': '18M04', 'state': 'WA'}
nfirs = connections['nfirs']
department = FireDepartment.objects.filter(**fd).first()
fid = department.id
print 'FireCARES id: %s' % fid
print 'https://firecares.org/departments/%s' % fid
In [6]:
%%time
# Get raw fire incident counts (prior to intersection with parcels)
with nfirs.cursor() as cur:
    cur.execute("""
        select count(1), fdid, state, extract(year from inc_date) as year
        from fireincident where fdid=%(fdid)s and state=%(state)s
        group by fdid, state, year
        order by year""", fd)
    fire_years = dictfetchall(cur)
display(fire_years)
print 'Total fires: %s\n' % sum([x['count'] for x in fire_years])
In [7]:
%%time
# Get building fire counts after structure hazard level calculations
sql = update.STRUCTURE_FIRES
print sql
with nfirs.cursor() as cur:
    cur.execute(sql, dict(fd, years=tuple([x['year'] for x in fire_years])))
    fires_by_hazard_level = dictfetchall(cur)
display(fires_by_hazard_level)
print 'Total geocoded fires: %s\n' % sum([x['count'] for x in fires_by_hazard_level])
In [37]:
sql = """
select alarm, a.inc_type, alarms, ff_death, oth_death, ST_X(geom) as x, ST_Y(geom) as y, COALESCE(y.risk_category, 'Unknown') as risk_category
from buildingfires a
LEFT JOIN (
SELECT state, fdid, inc_date, inc_no, exp_no, x.geom, x.parcel_id, x.risk_category
FROM (
SELECT * FROM incidentaddress a
LEFT JOIN parcel_risk_category_local using (parcel_id)
) AS x
) AS y
USING (state, fdid, inc_date, inc_no, exp_no)
WHERE a.state = %(state)s and a.fdid = %(fdid)s"""
with nfirs.cursor() as cur:
    cur.execute(sql, fd)
    fieldnames = [x.name for x in cur.description]
    rows = dictfetchall(cur)

out_name = '{id}-building-fires.csv'.format(id=fid)
full_path = '/tmp/' + out_name

with open(full_path, 'w') as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
# Push building fires to S3
!aws s3 cp $full_path s3://firecares-test/$out_name --acl="public-read"
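The LEFT JOIN ... USING plus COALESCE pattern in the query above (falling back to 'Unknown' when an incident has no parcel match) can be illustrated with an in-memory SQLite sketch; the tables here are simplified stand-ins, not the real NFIRS schemas:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
# Simplified stand-ins for buildingfires and the parcel risk lookup
cur.execute("CREATE TABLE buildingfires (state TEXT, fdid TEXT, inc_no TEXT)")
cur.execute("CREATE TABLE parcel_risk "
            "(state TEXT, fdid TEXT, inc_no TEXT, risk_category TEXT)")
cur.executemany("INSERT INTO buildingfires VALUES (?, ?, ?)",
                [('WA', '18M04', '1'), ('WA', '18M04', '2')])
# Only incident 1 has a parcel-derived risk category;
# incident 2 should fall back to 'Unknown'
cur.execute("INSERT INTO parcel_risk VALUES ('WA', '18M04', '1', 'High')")
cur.execute("""
    SELECT inc_no, COALESCE(risk_category, 'Unknown') AS risk_category
    FROM buildingfires
    LEFT JOIN parcel_risk USING (state, fdid, inc_no)
    ORDER BY inc_no""")
result = cur.fetchall()
print(result)  # [('1', 'High'), ('2', 'Unknown')]
```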
In [80]:
update.update_nfirs_counts(fid)
In [20]:
update.calculate_department_census_geom(fid)
In [19]:
# Fire counts by hazard level over all years; note that the performance score
# model currently ONLY works for hazard levels w/ fire counts
display(pd.DataFrame(fires_by_hazard_level).groupby(['risk_level']).sum()['count'])
update.update_performance_score(fid)